Goto




Langevin Quasi-Monte Carlo

Neural Information Processing Systems

Sampling from probability distributions is a crucial task in both statistics and machine learning. However, when the target distribution does not permit exact sampling, researchers often rely on Markov chain Monte Carlo (MCMC) methods.
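A common gradient-based MCMC method of the kind this line of work builds on is the unadjusted Langevin algorithm (ULA). As a minimal sketch (a toy standard-normal target, not the paper's quasi-Monte Carlo variant), each step follows the gradient of the log density plus Gaussian noise:

```python
import numpy as np

def ula_sample(grad_log_p, x0, step=0.05, n_steps=20000, seed=0):
    """Unadjusted Langevin algorithm: MCMC that only needs
    the gradient of the log target density."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Toy target: standard normal, so grad log p(x) = -x.
samples = ula_sample(lambda x: -x, x0=np.zeros(1))
burned = samples[5000:]
print(burned.mean(), burned.var())  # should be close to 0 and 1
```

ULA is biased at fixed step size (here the stationary variance is slightly above 1); adding a Metropolis accept/reject step turns it into the exact MALA sampler.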



Appendix for Bayesian Active Causal Discovery with Multi-Fidelity Experiments Anonymous Author(s) Affiliation Address email

Neural Information Processing Systems

Then, we intend to calculate the constraint part. The algorithm for the Licence method in the single-target intervention scenario is shown in Algorithm 1. The details of the experimental baselines are as follows. AIT [11] is an active learning method that utilizes the f-score to select intervention queries. REAL fidelity means the model always chooses the highest fidelity to conduct experiments.





Multilevel and Sequential Monte Carlo for Training-Free Diffusion Guidance

Gleich, Aidan, Schmidler, Scott C.

arXiv.org Machine Learning

We address the problem of accurate, training-free guidance for conditional generation in trained diffusion models. Existing methods typically rely on point-estimates to approximate the posterior score, often resulting in biased approximations that fail to capture multimodality inherent to the reverse process of diffusion models. We propose a sequential Monte Carlo (SMC) framework that constructs an unbiased estimator of $p_\theta(y|x_t)$ by integrating over the full denoising distribution via Monte Carlo approximation. To ensure computational tractability, we incorporate variance-reduction schemes based on Multi-Level Monte Carlo (MLMC). Our approach achieves new state-of-the-art results for training-free guidance on CIFAR-10 class-conditional generation, achieving $95.6\%$ accuracy with $3\times$ lower cost-per-success than baselines. On ImageNet, our algorithm achieves $1.5\times$ cost-per-success advantage over existing methods.
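The core bias the abstract describes can be seen in one dimension. In this toy illustration (an assumed setup, not the paper's diffusion model), the "denoising distribution" is N(0, 1) and the guidance likelihood is $p(y|x_0) = \exp(-x_0^2)$; plugging in the posterior mean overestimates the likelihood, while averaging over the full distribution is unbiased:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.standard_normal(100_000)  # draws from the toy denoising distribution

# Monte Carlo over the full distribution vs. a single point estimate.
mc_estimate = np.exp(-x0**2).mean()
point_estimate = np.exp(-0.0**2)   # plug in the posterior mean E[x0] = 0

print(mc_estimate)     # ~ 1/sqrt(3) ≈ 0.577 (the exact expectation)
print(point_estimate)  # 1.0 — biased upward by ignoring the spread
```

Because $\exp(-x^2)$ is concave near the mode, the point estimate overstates $p(y|x_t)$; the Monte Carlo average recovers the exact value $1/\sqrt{3}$, which is the kind of unbiasedness the SMC estimator provides.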


Multi-level Monte Carlo Dropout for Efficient Uncertainty Quantification

Pim, Aaron, Pryer, Tristan

arXiv.org Machine Learning

We develop a multilevel Monte Carlo (MLMC) framework for uncertainty quantification with Monte Carlo dropout. Treating dropout masks as a source of epistemic randomness, we define a fidelity hierarchy by the number of stochastic forward passes used to estimate predictive moments. We construct coupled coarse--fine estimators by reusing dropout masks across fidelities, yielding telescoping MLMC estimators for both predictive means and predictive variances that remain unbiased for the corresponding dropout-induced quantities while reducing sampling variance at fixed evaluation budget. We derive explicit bias, variance and effective cost expressions, together with sample-allocation rules across levels. Numerical experiments on forward and inverse PINNs--Uzawa benchmarks confirm the predicted variance rates and demonstrate efficiency gains over single-level MC-dropout at matched cost.
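The coupling-by-mask-reuse idea can be sketched with a toy "network" (an assumed linear model with dropout, not the paper's code): level $\ell$ averages $M_\ell = 2^\ell$ stochastic passes, the coarse level reuses half of the fine level's masks so the correction has small variance, and the telescoping sum stays unbiased for the finest level's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(16)   # fixed "weights"
x = rng.standard_normal(16)   # fixed input
p = 0.5                       # dropout keep probability

def forward(mask):
    # one stochastic forward pass; inverted-dropout scaling keeps E = w @ x
    return np.dot(w * mask / p, x)

def level_correction(level, n_outer):
    """Average of (fine - coarse) differences at this level, with coupling."""
    M = 2 ** level
    diffs = []
    for _ in range(n_outer):
        masks = rng.random((M, 16)) < p                 # M i.i.d. dropout masks
        fine = np.mean([forward(m) for m in masks])
        if level == 0:
            diffs.append(fine)
        else:
            coarse = np.mean([forward(m) for m in masks[: M // 2]])  # reuse masks
            diffs.append(fine - coarse)
    return np.mean(diffs)

mlmc_mean = sum(level_correction(l, n_outer=2000) for l in range(4))
exact_mean = np.dot(w, x)  # E[mask] = p cancels the 1/p scaling
print(mlmc_mean, exact_mean)
```

Because each correction term shares masks between its fine and coarse halves, its variance shrinks with the level, which is what lets MLMC spend most samples on the cheap coarse levels.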


Sampling via Stochastic Interpolants by Langevin-based Velocity and Initialization Estimation in Flow ODEs

Duan, Chenguang, Jiao, Yuling, Steidl, Gabriele, Wald, Christian, Yang, Jerry Zhijian, Zhang, Ruizhe

arXiv.org Machine Learning

We propose a novel method for sampling from unnormalized Boltzmann densities based on a probability-flow ordinary differential equation (ODE) derived from linear stochastic interpolants. The key innovation of our approach is the use of a sequence of Langevin samplers to enable efficient simulation of the flow. Specifically, these Langevin samplers are employed (i) to generate samples from the interpolant distribution at intermediate times and (ii) to construct, starting from these intermediate times, a robust estimator of the velocity field governing the flow ODE. For both applications of the Langevin diffusions, we establish convergence guarantees. Extensive numerical experiments demonstrate the efficiency of the proposed method on challenging multimodal distributions across a range of dimensions, as well as its effectiveness in Bayesian inference tasks.
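For intuition, here is a minimal probability-flow ODE sketch along a linear interpolant $x_t = (1-t)x_0 + t x_1$ between N(0, 1) and an assumed 1-D Gaussian "target" N(m, s²). In this Gaussian toy case the velocity field $v(x,t) = \mathbb{E}[x_1 - x_0 \mid x_t = x]$ is available in closed form (in the paper it must be estimated, e.g. with Langevin samplers), so plain forward Euler transports base samples to the target:

```python
import numpy as np

m, s = 2.0, 0.5  # assumed target N(m, s^2)

def velocity(x, t):
    # closed-form E[x1 - x0 | x_t = x] for Gaussian endpoints
    a, b = 1.0 - t, t
    var_t = a**2 + (b * s) ** 2          # Var(x_t)
    return m + (b * s**2 - a) * (x - b * m) / var_t

rng = np.random.default_rng(0)
x = rng.standard_normal(50_000)          # samples from the base N(0, 1)
n_steps = 1000
dt = 1.0 / n_steps
for k in range(n_steps):                 # forward Euler in time
    x = x + dt * velocity(x, k * dt)

print(x.mean(), x.std())  # should approach m = 2.0 and s = 0.5
```

For non-Gaussian, multimodal targets no such closed form exists, which is exactly where the paper's Langevin-based estimates of the intermediate distributions and of the velocity field come in.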